
Ian R. Kerr [Archive]

ARCHIVED WEBSITE

  • Website maintained with the support of the Ian R. Kerr Memorial Fund at the Centre for Law, Technology and Society at the University of Ottawa
  • Blog
  • About
    • Biography
    • Press Kit
    • Contact
  • Teaching
    • Approach
    • Contracts
    • Laws of Robotics
    • Building Better Humans
  • Publications
    • Books
    • Book Chapters
    • Journal Articles
    • Editorials
  • Research Team
  • Stuff

The Death of the AI Author

June 14, 2019 CLTS

For years, I have been hoping to collaborate with my dear friend Carys Craig, a copyright expert and rockstar professor at Osgoode Hall whose work I have long admired. In this early draft, we confront the issue of AI authorship. In a world where robots are writing movie scripts and composing music, much of the second-generation literature on AI and authorship asks whether the increasing sophistication and independence of generative code should cause us to rethink embedded assumptions about the meaning of authorship, arguing that recognizing the authored nature of AI-generated works may require a less profound doctrinal leap than has historically been suggested.

In this essay, we argue that the threshold for authorship does not depend on the evolution or state of the art in AI or robotics. Instead, we contend that the very notion of AI-authorship rests on a category mistake: it is not an error about the current or potential capacities, capabilities, intelligence or sophistication of machines; rather it is an error about the ontology of authorship.

Building on the established critique of the romantic author figure, we argue that the death of the romantic author also and equally entails the death of the AI author. We provide a theoretical account of authorship that demonstrates why claims of AI authorship do not make sense in terms of 'the realities of the world in which the problem exists.' (Samuelson, 1985) Those realities, we argue, must push us past bare doctrinal or utilitarian considerations of originality, assessed in terms of what an author must do. Instead, what they demand is an ontological consideration of what an author must be. The ontological question, we suggest, requires an account of authorship that is relational; it necessitates a vision of authorship as a dialogic and communicative act that is inherently social, with the cultivation of selfhood and social relations as the entire point of the practice. Of course, this ontological inquiry into the plausibility of AI-authorship transcends copyright law and its particular doctrinal conundrums, going to the normative core of how law should — and should not — think about robots and AI, and their role in human relations.

Download the full article.

Schrödinger’s Robot: Privacy in Uncertain States

June 3, 2019 CLTS

As robots and AIs are becoming ever-present in public and private, this piece addresses an increasingly relevant issue: can robots or AIs operating independently of human intervention or oversight diminish our privacy? Here, I consider two equal and opposite schools of thought on this issue.

On the side of the robots, we see that machines are starting to outperform human experts in an increasing array of narrow tasks, including driving, surgery, and medical diagnostics. This performance track record is fueling a growing optimism that robots and AIs will one day exceed humans more generally and spectacularly; some think, to the point where we will have to consider their moral and legal status.

On the side of privacy, I consider the exact opposite: that robots and AIs are, in a legal sense, nothing. The prevailing view is that since robots and AIs are neither sentient nor capable of human-level cognition, they are of no consequence to privacy law.

In this paper, I argue that robots and AIs operating independently of human intervention can and, in some cases, already do diminish our privacy. Using the framework of epistemic privacy, we can begin to understand the kind of cognizance that gives rise to diminished privacy. Because machines can act on the basis of the beliefs they form in ways that affect people’s life chances and opportunities, I argue that they demonstrate the kind of awareness that definitively implicates privacy. I conclude that legal theory and doctrine will have to expand their understanding of privacy relationships to include robots and AIs that meet these epistemic conditions; indeed, an increasing number of machines already possess the epistemic qualities that force us to rethink privacy relationships between humans and machines.

Read the full article.

When AIs Outperform Doctors: Confronting the challenge of a tort-induced over-reliance on machine learning

June 3, 2019 CLTS

I wrote this piece in collaboration with my long-time pal and We Robot co-founder Michael Froomkin and our genius colleague in machine learning, Joëlle Pineau. In it, we observe that someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. In that context, we ask what the dominance of ML diagnostics will mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and, in the long run, for the quality of medical diagnostics itself.

In our view, once ML diagnosticians are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve.

Although at first doctor + machine may be more effective than either alone—because humans and ML systems might make very different kinds of mistakes—in time, as ML systems improve, effective ML could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment as well. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decisions that are no longer easily audited or even understood by human doctors.

Given the well-documented fact that treatment strategies are often less effective when deployed in clinical practice than in preliminary evaluation, the lack of transparency introduced by ML algorithms could lead to a decrease in the overall quality of care. My co-authors and I describe the salient technical aspects of this scenario, particularly as they relate to diagnosis, and canvass various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules to avoid a machine-only diagnostic regime. We conclude that the appropriate revision to the standard of care requires maintaining meaningful participation by physicians in the loop.

Read the full article.

Robots and Artificial Intelligence in Health Care

May 31, 2019 CLTS

This chapter, written with Jason Millar and Noel Corriveau, examines the increased use and evolution of robotics and AI in the health sector as well as emerging challenges associated with this increase. Our goal is to provoke, challenge and inspire readers to think critically about imminent legal and regulatory issues in the healthcare sector.

We begin in part one by offering an overview of the current use of robots and AI in healthcare: from surgical robots, exoskeletons and prosthetics to artificial organs, pharmacy and hospital automation robots, and finally the role of social robots and Big Data analytics. While great technological strides are materializing, there is a corresponding need for regulatory mechanisms to follow, ensuring that the relationship between human and machine evolves in step to achieve the best outcomes for patients. In an era of information overload, security standards and data protection and privacy laws will need to adapt to ensure patient safety and advocacy.

In part two, we discuss the main sociotechnical considerations. First, we examine how these considerations are changing medical practice overall, as understanding and decision-making processes evolve to include AI and robots. Next, we address how social valence considerations and the evidence-based paradox both raise important policy questions about the appropriateness of delegating human tasks to machines, and about changes in the assessment of liability and negligence.

In part three, we address the legal considerations of liability for robots and AI, for physicians and for institutions using robots and AI, as well as AI and robots as medical devices. Key considerations regarding negligence in medical malpractice come into play as the duty and standard of care evolve alongside these emerging technologies. Legal liability will also need to evolve as we ask whether a physician who chooses to rely on their own skills, knowledge and judgement over an AI or robot recommendation should be held liable in negligence. Finally, we address the legal, labour and economic implications of assessing whether robots can be considered employees of a health care institution, potentially opening the door to vicarious liability. We remind our readers that institutions will also have to consider their direct duties to patients, including their duty to instruct and supervise in the case of robots.

Our overall aim is to leverage technology to provide accessible and efficient health care services, without over- or under-regulation. We believe this can be achieved, while mitigating risks, through the development of new social, legal and policy frameworks.

Read the full chapter.

The Devil Is in the Defaults

May 31, 2019 CLTS

This review essay explores the concept of ‘shifting defaults’ as discussed by my dear friend Mireille Hildebrandt in her truly brilliant and absolutely indispensable book: Smart Technologies and the End(s) of Law. Although even attentive readers might mistake the subject of defaults as a minor topic within her book, I argue that they are of paramount importance to Hildebrandt’s central thesis: namely, that the law’s present mode of existence is imperilled by smart technologies.

I begin by offering a taxonomy for Hildebrandt’s ‘shifting defaults’, carving them into four categories: (i) natural, (ii) technological, (iii) legal, and (iv) normative. Natural defaults, like human memory, can be shifted by a technological innovation like the written word, which augments our natural memory, reconfiguring our brains, culture and politics in the process. Technological defaults, by contrast, can be changed only with permission. I argue that their demonstrated power to influence choice, particularly when opaque to the average user, poses a significant threat to privacy, identity, autonomy, and, ultimately, many of our other normative and legal cornerstones. Legal defaults have been developed to clarify the law in the absence of a competing intention; they exist to accommodate unforeseen situations. I argue that legal defaults are regulated by courts and legislators with the aim of promoting clarity, predictability and the public good. Finally, normative defaults point to the difficulty of influencing a ‘default of usage’ once it has been established. I liken this to the philosophical notion of a ‘hardening of the categories’, whereby an established norm can be difficult to violate without breaching social standards. A comparison of legal and technological defaults reveals the latter to be especially problematic, as the authority to shift them lies entirely in the hands of private actors.

Ultimately, I argue that technological defaults should be set to maximize user privacy. A legislative mandate, 'privacy by default', could protect against technology’s proven power to shift both natural and normative defaults to influence choice and undermine autonomy. I conclude by reframing Hildebrandt’s central thesis, questioning whether the Rule of Law itself could ever be legitimately displaced by smart technologies. Careful readers who noticed my use of the word ‘legitimately’ in the preceding sentence can probably guess what my answer is.

Download the full article.

Delegation, relinquishment, and responsibility: The prospect of expert robots

May 31, 2019 CLTS

Rapid technological development in robotics and Artificial Intelligence has given rise to a dilemma that is becoming harder to ignore. Should we continue to entrust all our decision-making to humans, fallible though they may be, or should we instead delegate decision-making to robots, relinquishing control to the machines for the greater good?

This chapter, written in collaboration with my good friend and colleague Jason Millar, engages this dilemma, exploring the notion of robots as ‘experts’ rather than tools. When robots are considered mere tools, their true capabilities may be disguised. Applying the normative pull of evidence, we argue that decision-making authority should, in some circumstances, be delegated to expert robots in cases where these robots can consistently perform better than their human counterparts. This shift in decision-making responsibility is especially important in time-sensitive situations where humans lack the capacity to process vast amounts of information, an advantage held by fast-computing expert robots like IBM’s Watson.

Here, we explore four hypothetical co-robotic cases in which we argue that expert robots ought to be granted decision-making authority even in cases of disagreement. We also address the responsibilities of robots placed in decision-making roles, and the likely challenges we may face as a result. For example, unpredictable expert robots, acting under time pressure and without the ability to express their thinking, pose challenges for assessing liability. Overall, this chapter aims to offer a narrative of what delegating and relinquishing control to expert robots could look like, but does not assess the maintenance of human control or the trust and reliability factors required to make the decision to delegate.

Read the full chapter.

Older Posts →

Special thanks and much gratitude are owed to one of my favorite artists, Eric Joyner, for his permission to display a number of inspirational and thought-provoking works in the banner & background.

You can contact the Centre for Law, Technology and Society | Creative Commons Licence